Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting

Authors

  • Bernard Ans
  • Stephane Rousset
  • Robert M. French
  • Serban C. Musca
Abstract

While humans forget gradually, highly distributed connectionist networks forget catastrophically: newly learned information often completely erases previously learned information. This is not only cognitively implausible but practically disastrous. However, it is not easy in connectionist cognitive modelling to avoid highly distributed neural networks, if only because of their ability to generalize. A realistic and effective system that solves the problem of catastrophic interference in sequential learning of ‘static’ (i.e. non-temporally ordered) patterns has been proposed recently (Robins 1995, Connection Science, 7: 123–146; Robins 1996, Connection Science, 8: 259–275; Ans and Rousset 1997, CR Académie des Sciences Paris, Life Sciences, 320: 989–997; French 1997, Connection Science, 9: 353–379; French 1999, Trends in Cognitive Sciences, 3: 128–135; Ans and Rousset 2000, Connection Science, 12: 1–19). The basic principle is to learn new external patterns interleaved with internally generated ‘pseudopatterns’ (produced from random activation) that reflect the previously learned information. However, to be credible, this self-refreshing mechanism for static learning has to encompass our human ability to learn serially many temporal sequences of patterns without catastrophic forgetting. Temporal sequence learning is arguably more important than static pattern learning in the real world. In this paper, we develop a dual-network architecture in which self-generated pseudopatterns reflect (non-temporally) all the sequences of temporally ordered items previously learned. Using these pseudopatterns, several self-refreshing mechanisms that eliminate catastrophic forgetting in sequence learning are described, and their efficiency is demonstrated through simulations. Finally, an experiment is presented that shows a close similarity between human and simulated behaviour.
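
To make the pseudopattern principle concrete, the following is a minimal sketch of pseudorehearsal for static patterns (the simpler case cited above from Robins 1995), written as a small NumPy multilayer perceptron. Everything here is illustrative and assumed, not the authors' dual-network implementation: random binary inputs are pushed through the network after it has learned a first task, the resulting input-output pairs serve as pseudopatterns, and a second task is then learned interleaved with them.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class MLP:
        """Tiny one-hidden-layer backpropagation network (no biases, for brevity)."""

        def __init__(self, n_in, n_hid, n_out):
            self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
            self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

        def forward(self, x):
            h = sigmoid(x @ self.W1)
            return h, sigmoid(h @ self.W2)

        def train(self, X, Y, epochs=2000, lr=0.1):
            for _ in range(epochs):
                h, y = self.forward(X)
                d2 = (y - Y) * y * (1.0 - y)            # output-layer delta
                d1 = (d2 @ self.W2.T) * h * (1.0 - h)   # hidden-layer delta
                self.W2 -= lr * h.T @ d2
                self.W1 -= lr * X.T @ d1

    def pseudopatterns(net, n, n_in):
        """Random binary inputs paired with the trained network's own outputs."""
        X = rng.integers(0, 2, (n, n_in)).astype(float)
        _, Y = net.forward(X)
        return X, Y

    # Two arbitrary binary association tasks (toy, assumed data).
    XA = rng.integers(0, 2, (8, 10)).astype(float)
    YA = rng.integers(0, 2, (8, 5)).astype(float)
    XB = rng.integers(0, 2, (8, 10)).astype(float)
    YB = rng.integers(0, 2, (8, 5)).astype(float)

    net = MLP(10, 16, 5)
    net.train(XA, YA)                       # learn task A first

    Xp, Yp = pseudopatterns(net, 32, 10)    # snapshot task-A knowledge

    # Learn task B interleaved with the pseudopatterns instead of alone.
    net.train(np.vstack([XB, Xp]), np.vstack([YB, Yp]))

    _, pred = net.forward(XA)
    print("mean task-A error after learning task B:", np.abs(pred - YA).mean())

Dropping the pseudopattern rows from the final training call is the quickest way to see the contrast: the reported task-A error then typically rises sharply, which is the catastrophic forgetting the abstract describes.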

Related articles

Artificial neural networks whispering to the brain: nonlinear system attractors induce familiarity with never seen items

Attractors of nonlinear neural systems are at the core of the memory self-refreshing mechanism of human memory models that assume memories are dynamically maintained in a distributed network [Ans, B., and Rousset, S. (1997), ‘Avoiding Catastrophic Forgetting by Coupling Two Reverberating Neural Networks’, Comptes Rendus de l’Académie des Sciences Paris, Life Sciences, 320, 989–997; Ans, B., and...

Sequential Learning in Distributed Neural Networks without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It

In sequential learning tasks, artificial distributed neural networks forget catastrophically: newly learned information most often erases the information previously learned. This major weakness is not only cognitively implausible, as humans forget gradually, but disastrous for most practical applications. An efficient solution to catastrophic forgetting has been recently proposed for backpropaga...

Self-refreshing Som as a Semantic Memory Model

Natural and artificial cognitive systems suffer from forgetting information. However, in natural systems forgetting is typically gradual whereas in artificial systems forgetting is often catastrophic. Catastrophic forgetting is also a problem for the Self-Organizing Map (SOM) when used as a semantic memory model in a continuous learning task in a nonstationary environment. Methods based on rehe...

Neural networks with a self-refreshing memory: knowledge transfer in sequential learning tasks without catastrophic forgetting

We explore a dual-network architecture with self-refreshing memory (Ans and Rousset 1997) which overcomes catastrophic forgetting in sequential learning tasks. Its principle is that new knowledge is learned along with an internally generated activity reflecting the network history. What mainly distinguishes this model from others using pseudorehearsal in feedforward multilayer networks is a rev...
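
This snippet mentions an internally generated activity reflecting the network history; in the coupled ‘reverberating’ networks cited above (Ans and Rousset 1997), that activity is produced by re-injecting the network's own output several times rather than taken from a single random draw. Below is a minimal sketch of such a reverberation loop; the mapping merely stands in for a trained autoassociative network, and all names and weights here are assumptions, not the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for a *trained* autoassociative network (input -> input
    # reconstruction); in the real model these weights come from learning.
    W = rng.normal(0.0, 1.0, (6, 6))

    def autoassociate(x):
        return 1.0 / (1.0 + np.exp(-(x @ W)))

    def reverberate(f, n_in, cycles=5):
        """Build one pseudopattern input by re-injecting the network's own
        output: start from random activation and pass it through the
        autoassociative mapping several times, so that it drifts toward
        activity shaped by prior learning."""
        x = rng.integers(0, 2, n_in).astype(float)
        for _ in range(cycles):
            x = f(x)
        return x

    print(reverberate(autoassociate, 6))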

Dual-Network Memory Model for Temporal Sequences

In neural networks, when new patterns are learned by a network, they radically interfere with previously stored patterns. This drawback is called catastrophic forgetting. We have already proposed a biologically inspired dual-network memory model which can greatly reduce this forgetting for static patterns. In this model, information is first stored in the hippocampal network, and thereafter, it is ...
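
The main abstract above states that self-generated pseudopatterns reflect the learned sequences ‘non-temporally’. One simple way to picture this, sketched below with assumed toy data, is to recode a temporal sequence as static (context → next item) pairs, the form that a feedforward associator and its pseudopatterns can carry; this illustrates the general idea only, not the encoding used in either paper.

    import numpy as np

    def sequence_to_pairs(seq, context_len=2):
        """Recode a temporal sequence as static (context -> next item) pairs.

        A sliding window of `context_len` items becomes the input vector and
        the following item becomes the target, so an ordinary feedforward
        network (and its pseudopatterns) can represent the sequence
        non-temporally."""
        X, Y = [], []
        for t in range(len(seq) - context_len):
            X.append(np.concatenate(seq[t:t + context_len]))
            Y.append(seq[t + context_len])
        return np.array(X), np.array(Y)

    # Toy sequence of four 3-bit items (assumed data).
    items = [np.array(b, dtype=float) for b in
             ([1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0])]
    X, Y = sequence_to_pairs(items)
    print(X.shape, Y.shape)   # (2, 6) (2, 3): two context -> next-item pairs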

Journal:
  • Connect. Sci.

Volume 16, Issue

Pages -

Publication date 2004